
    Code Complexity in Introductory Programming Courses

    Instructors of introductory programming courses would benefit from a metric for evaluating the sophistication of student code. Since introductory programming courses pack a wide spectrum of topics into a short timeframe, student code changes quickly, raising the question of whether existing software complexity metrics effectively capture student growth as reflected in their code. We investigate code produced by over 800 students in two different Python-based CS1 courses to determine whether frequently used code quality and complexity metrics (e.g., cyclomatic and Halstead complexities) or metrics based on length and syntactic complexity are more effective as a heuristic for gauging students' progress through a course. We conclude that the traditional metrics do not correlate well with time passed in the course. In contrast, metrics based on syntactic complexity and solution size correlate strongly with time in the course, suggesting that they may be more appropriate for evaluating how student code evolves in a course context.
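
    As a rough illustration of the metric families compared above, the sketch below computes a McCabe-style cyclomatic count, a crude syntactic-complexity proxy (distinct AST node types), and solution size for a Python snippet, using only the standard-library ast module. The metric definitions here are simplified assumptions for illustration, not the paper's exact formulas.

        import ast

        def cyclomatic_complexity(source: str) -> int:
            # McCabe-style count: 1 plus one per branching construct.
            tree = ast.parse(source)
            branching = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
            return 1 + sum(isinstance(node, branching) for node in ast.walk(tree))

        def syntactic_complexity(source: str) -> int:
            # Distinct AST node types used: a rough proxy for how much of
            # the language's syntax a student has exercised.
            return len({type(node).__name__ for node in ast.walk(ast.parse(source))})

        def solution_size(source: str) -> int:
            # Non-blank source lines.
            return sum(1 for line in source.splitlines() if line.strip())

        student_code = """
        def mean(xs):
            total = 0
            for x in xs:
                total += x
            return total / len(xs)
        """.replace("\n        ", "\n")  # strip the demo indentation

        print(cyclomatic_complexity(student_code))  # 2: base 1 + one for-loop
        print(syntactic_complexity(student_code))   # distinct node types used
        print(solution_size(student_code))          # 5 non-blank lines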

    Automated assessment of programming assignments : visual feedback, assignment mobility, and assessment of students' testing skills

    The main objective of this thesis is to improve the automated assessment of programming assignments from the perspective of assessment tool developers. We have developed visual feedback on the functionality of students' programs and explored methods to control the level of detail in visual feedback. We have found that visual feedback does not require major changes to existing assessment platforms. Most modern platforms are web based, creating an opportunity to describe visualizations in JavaScript and HTML embedded into textual feedback. Our preliminary results on the effectiveness of automatic visual feedback indicate that students perform equally well with visual and textual feedback. However, visual feedback based on automatically extracted object graphs can take less time to prepare than textual feedback of good quality.

    We have also developed programming assignments that are easier to port from one server environment to another by performing assessment on the client side. This not only makes it easier to use the same assignments in different server environments but also removes the need for sandboxing the execution of students' programs. The approach will likely become more important as interactive study materials grow in popularity. Client-side assessment is more suitable for self-study material than for grading because assessment results sent by a client are often too easy to falsify.

    Testing is an important part of programming, and automated assessment should also cover students' self-written tests. We have analyzed how students behave when they are rewarded for structural test coverage (e.g., line coverage) and found that this can lead students to write tests with good coverage but with poor ability to detect faulty programs. Mutation analysis, where a large number of (faulty) programs are automatically derived from the program under test, turns out to be an effective way to detect tests that would otherwise fool our assessment systems. Applying mutation analysis directly for grading is problematic because some of the derived programs are equivalent to the original, and some assignments or solution strategies generate more equivalent mutants than others.
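
    The mutation-analysis idea lends itself to a small sketch: derive a faulty variant of a program by flipping an operator, then check whether a given test suite notices. The program and the coverage-gaming tests below are hypothetical illustrations, not material from the thesis.

        import ast

        class FlipEquality(ast.NodeTransformer):
            # One simple mutation operator: replace == with !=.
            def visit_Compare(self, node):
                self.generic_visit(node)
                node.ops = [ast.NotEq() if isinstance(op, ast.Eq) else op
                            for op in node.ops]
                return node

        program = "def is_even(n):\n    return n % 2 == 0\n"

        def coverage_gaming_tests(is_even):
            # Executes every line (full line coverage) but asserts nothing:
            # the weak-test pattern found when rewarding structural coverage.
            is_even(2)
            return True

        for label, tree in (("original", ast.parse(program)),
                            ("mutant  ", FlipEquality().visit(ast.parse(program)))):
            ast.fix_missing_locations(tree)
            namespace = {}
            exec(compile(tree, "<string>", "exec"), namespace)
            print(label, "passes the tests:", coverage_gaming_tests(namespace["is_even"]))

        # Both runs print True: the mutant survives, revealing that the
        # tests cannot actually detect a faulty program.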

    Preventing Keystroke Based Identification in Open Data Sets

    Large-scale courses such as Massive Online Open Courses (MOOCs) can be a great data source for researchers. Ideally, the data gathered on such courses should be openly available to all researchers. Studies could be easily replicated and novel studies on existing data could be conducted. However, very fine-grained data such as source code snapshots can contain hidden identifiers. For example, distinct typing patterns that identify individuals can be extracted from such data. Hence, simply removing explicit identifiers such as names and student numbers is not sufficient to protect the privacy of the users who have supplied the data. At the same time, removing all keystroke information would decrease the value of the shared data significantly. In this work, we study how keystroke data from a programming context could be modified to prevent keystroke latency based identification whilst still retaining information that can be used to, for example, infer programming experience. We investigate the degree of anonymization required to render identification of students based on their typing patterns unreliable. Then, we study whether the modified keystroke data can still be used to infer the programming experience of the students as a case study of whether the anonymized typing patterns have retained at least some informative value. We show that it is possible to modify data so that keystroke latency based identification is no longer accurate, but the programming experience of the students can still be inferred, i.e. the data still has value to researchers. In a broader context, our results indicate that information and anonymity are not necessarily mutually exclusive.
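
    One plausible way to realise such anonymization is to add bounded random jitter to inter-keystroke latencies. The sketch below assumes keystroke data arrives as (key, timestamp-in-milliseconds) pairs; it illustrates the idea only and is not necessarily the exact transformation used in the paper.

        import random

        def anonymize_latencies(events, jitter_ms=80, seed=None):
            # events: list of (key, timestamp_ms) pairs in chronological order.
            # Each inter-key latency is perturbed with uniform noise, blurring
            # the fine-grained digraph timings used for identification while
            # keeping coarse statistics (e.g., overall typing speed) roughly
            # intact on average.
            rng = random.Random(seed)
            out = [events[0]]
            for i in range(1, len(events)):
                latency = events[i][1] - events[i - 1][1]
                noisy = max(0.0, latency + rng.uniform(-jitter_ms, jitter_ms))
                out.append((events[i][0], out[-1][1] + noisy))
            return out

        typing = [("d", 0), ("e", 140), ("f", 310), (" ", 450), ("m", 620)]
        print(anonymize_latencies(typing, seed=42))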

    Oppimisanalytiikka digitaalisessa ympäristössä (Learning Analytics in a Digital Environment)


    A systematic literature review of capstone courses in software engineering

    Context: Tertiary education institutions aim to prepare their computer science and software engineering students for working life. While many of the technical principles are covered in lower-level courses, team-based capstone courses are a common way to provide students with hands-on experience and teach soft skills. Objective: This paper explores the characteristics of project-based software engineering capstone courses presented in the literature. The goal of this work is to understand the pros and cons of different approaches by synthesising the various aspects of software engineering capstone courses and related experiences. Method: In a systematic literature review covering 2007–2022, we identified 127 articles describing real-world capstone courses. These articles were analysed based on their presented course characteristics and the reported course outcomes. Results: The characteristics were synthesised into a taxonomy consisting of duration, team sizes, client and project sources, project implementation, and student assessment. We found that capstone courses generally last one semester and divide students into groups of 4–5 that work on a project for a client. For a slight majority of courses, the clients are external to the course staff, and students are often expected to produce a proof-of-concept-level software product as the main end deliverable. The courses generally include various forms of student assessment both during and at the end of the course. Conclusions: This paper provides researchers and educators with a classification of characteristics of software engineering capstone courses based on previous research. We also synthesise insights on the reported course outcomes. Our review aims to help educators identify various ways of organising capstones and effectively plan and deliver their own capstone courses. The characterisation also helps researchers conduct further studies on software engineering capstones.

    To Opt In or To Opt Out? Predicting Student Preference for Learning Analytics-Based Formative Feedback

    Teachers’ work is increasingly augmented with intelligent tools that extend their pedagogical abilities. While these tools may have positive effects, they require use of students’ personal data, and more research into student preferences regarding these tools is needed. In this study, we investigated how learning strategies and study engagement are related to students’ willingness to share data with learning analytics (LA) applications and whether these factors predict students’ opt-in for LA-based formative feedback. Students (N=158) on a self-paced online course set their personal completion goals for the course and chose to opt in for or opt out of personalized feedback based on their progress toward their goal. We collected self-reported measures regarding learning strategies, study engagement, and willingness to share data for learning analytics through a survey (N=73). Using a regularized partial correlation network, we found that although willingness to share data was weakly connected to different aspects of learning strategies and study engagement, students with lower self-efficacy were more hesitant to share data about their performance. Furthermore, we could not reliably predict students’ opt-in decisions based on their learning strategies, study engagement, or willingness to share data using logistic regression. Our findings underline the privacy paradox in online privacy behavior: theoretical unwillingness to share personal data does not necessarily lead to opting out of interventions that require the disclosure of personal data. Future research should look into why students opt in for or opt out of learning analytics interventions.
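
    For concreteness, the sketch below shows the general shape of such an opt-in prediction, assuming scikit-learn is available. The features and data are hypothetical placeholders (random numbers standing in for survey responses); a near-chance AUC on them simply echoes the study's finding that these features carried little predictive signal.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Hypothetical per-student survey features (self-efficacy, study
        # engagement, willingness to share data) and opt-in decisions.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(73, 3))
        y = rng.integers(0, 2, size=73)  # 1 = opted in for personalized feedback

        scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
        print("cross-validated AUC:", round(scores.mean(), 2))  # near 0.5 here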

    CodeProcess Charts: Visualizing the Process of Writing Code

    Instructors of computer programming courses evaluate student progress on code submissions, exams, and other activities. The evaluation of code submissions is typically a summative assessment that gives very little insight into the process the student used when designing and writing the code. Thus, a tool that offers instructors a view into how students actually write their code could have broad impacts on assessment, intervention, instructional design, and plagiarism detection. In this article we propose an interactive software tool with a novel visualization that includes both static and dynamic views of the process that students take to complete computer programming assignments. We report results of an exploratory think-aloud study in which instructors offer thoughts as to the utility and potential of the tool. In the think-aloud study, we observed that the instructors easily identified multiple coding strategies (or the lack thereof), were able to recognize plagiarism, and noticed a clear need for wider dissemination of tools for visualizing the programming process.
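
    The kind of data behind such a visualization can be sketched with the standard-library difflib: reduce timestamped snapshots of a student's file to per-step line additions and deletions. This illustrates the underlying idea only; it is not the CodeProcess tool itself, and the snapshots are hypothetical.

        import difflib

        snapshots = [
            (0,   "def area(r):\n    pass\n"),
            (95,  "def area(r):\n    return 3.14 * r * r\n"),
            (210, "import math\n\ndef area(r):\n    return math.pi * r ** 2\n"),
        ]

        for (_, before), (t, after) in zip(snapshots, snapshots[1:]):
            diff = list(difflib.ndiff(before.splitlines(), after.splitlines()))
            added = sum(line.startswith("+ ") for line in diff)
            removed = sum(line.startswith("- ") for line in diff)
            print(f"t={t:>4}s  +{added} -{removed} lines")

        # Plotting these counts over time yields a static overview of the
        # process; replaying the snapshots in order gives a dynamic view.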

    Persistence of Time Management Behavior of Students and Its Relationship with Performance in Software Projects

    Teachers often urge their students to start working on assignments early. There is even a fair amount of scientific evidence that starting early is beneficial for learning. In this work, we investigate students’ time management behavior in a second-year project-based software engineering course. In the course, students work on a software project in small groups of four to six students. We study time management from multiple angles. Firstly, we conduct an exploratory factor analysis and study how different time management related behavioral metrics are related to one another, for example, whether individual students’ time management practices in the second-year group project-based course are similar to their earlier time management practices in first-year courses where students work on assignments individually. Understanding how students’ previous time management behavior is manifested in later project-based courses would be beneficial when designing project-based education. Secondly, we study whether students’ time management practices affect the peer-review scores they get from their group members. Lastly, we explore how time management affects course performance. Our findings suggest that time management behavior, even from courses taken in the past, can be used to predict how students perform in future courses.
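
    One example of a time-management metric of the sort derivable from submission logs is sketched below: the mean fraction of the assignment window still remaining when a student submits. Both the data and the metric itself are hypothetical illustrations, not the paper's exact measures.

        from datetime import datetime, timedelta

        def earliness(submissions, deadlines):
            # Mean fraction of the assignment window still remaining at
            # submission time (1.0 = submitted immediately, 0.0 = at deadline).
            fractions = []
            for sub, (opened, due) in zip(submissions, deadlines):
                window = (due - opened).total_seconds()
                remaining = max(0.0, (due - sub).total_seconds())
                fractions.append(remaining / window)
            return sum(fractions) / len(fractions)

        week = timedelta(days=7)
        opened = datetime(2024, 1, 1)
        deadlines = [(opened, opened + week), (opened + week, opened + 2 * week)]
        submissions = [opened + timedelta(days=2),
                       opened + week + timedelta(days=6)]
        print(f"earliness: {earliness(submissions, deadlines):.2f}")  # 0.43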